Learning Fairness in Multi-Agent Systems

Neural Information Processing Systems

Fairness is essential for human society, contributing to stability and productivity. Similarly, fairness is key to many multi-agent systems. Incorporating fairness into multi-agent learning can help multi-agent systems become both efficient and stable. However, learning efficiency and fairness simultaneously is a complex, multi-objective, joint-policy optimization problem. To tackle these difficulties, we propose FEN, a novel hierarchical reinforcement learning model. We first decompose fairness for each agent and propose a fair-efficient reward that each agent learns its own policy to optimize. To avoid multi-objective conflict, we design a hierarchy consisting of a controller and several sub-policies, where the controller maximizes the fair-efficient reward by switching among the sub-policies, which provide diverse behaviors to interact with the environment. FEN can be trained in a fully decentralized way, making it easy to deploy in real-world applications. Empirically, we show that FEN easily learns both fairness and efficiency and significantly outperforms baselines in a variety of multi-agent scenarios.
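The controller/sub-policy hierarchy described in the abstract can be sketched in a few lines. Everything below is an illustrative assumption, not the paper's implementation: the class names, the toy environment transitions, the greedy value-based controller, and the exact form of the fair-efficient reward are all placeholders standing in for the components the abstract names.

```python
# Hedged sketch of a FEN-style hierarchy for ONE decentralized agent.
# All names, the toy environment, and the reward shape are illustrative
# assumptions; the paper's actual model trains neural policies with RL.

class SubPolicy:
    """One of several sub-policies providing diverse behaviors."""
    def __init__(self, bias):
        self.bias = bias  # makes each sub-policy behave differently

    def act(self, obs):
        # Toy deterministic policy over 4 discrete actions (assumed).
        return (obs + self.bias) % 4

class Controller:
    """Switches among sub-policies to maximize the fair-efficient reward."""
    def __init__(self, n_sub):
        self.values = [0.0] * n_sub  # running value estimate per sub-policy

    def select(self):
        # Greedy selection over estimated values (exploration omitted).
        return max(range(len(self.values)), key=lambda i: self.values[i])

    def update(self, idx, fair_eff_reward, lr=0.1):
        self.values[idx] += lr * (fair_eff_reward - self.values[idx])

def fair_efficient_reward(own_utility, mean_utility, eps=0.1):
    # Illustrative stand-in: reward is high when the agent is productive
    # AND its utility stays close to the population mean (fairness).
    return own_utility / (eps + abs(own_utility - mean_utility))

# Decentralized training loop: the controller picks a sub-policy every
# T environment steps and is updated from the fair-efficient reward.
subs = [SubPolicy(b) for b in range(3)]
ctrl = Controller(len(subs))
T = 5  # controller acts on a slower timescale than the sub-policies
for episode in range(20):
    idx = ctrl.select()
    utility, obs = 0.0, 0
    for t in range(T):
        action = subs[idx].act(obs)
        obs = (obs + action) % 7                 # stub transition
        utility += 1.0 if action == 1 else 0.2   # stub resource pickup
    r = fair_efficient_reward(utility, mean_utility=2.0)
    ctrl.update(idx, r)
```

The key design point the abstract describes survives even in this sketch: the multi-objective conflict is kept out of the sub-policies (each just acts in the environment), and only the controller optimizes the combined fair-efficient objective by choosing which behavior to deploy.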


Reviews: Learning Fairness in Multi-Agent Systems

Neural Information Processing Systems

The authors propose a Fair-Efficient Network (FEN) to better train decentralized multi-agent reinforcement learning systems on tasks that involve resource allocation. In particular, they introduce a shaping reward and a hierarchical model, which they train with PPO on three new reinforcement learning environments (the code for which is made available). Their model outperforms several baselines, and ablation studies demonstrate the usefulness of the hierarchical nature of the model. The aims of the work are clear and well stated. However, there are significant omissions in the review of related literature.


Reviews: Learning Fairness in Multi-Agent Systems

Neural Information Processing Systems

There was general consensus amongst the reviewers that this paper is well written and presents some interesting and novel ideas. There were initially quite some concerns, but the rebuttal brought a lot of clarity to the identified issues, which led to general agreement in the discussion of the paper that this work is worthy of publication at NeurIPS. It is important, though, that the authors take care to include the promised missing details and the extended description of related work in the camera-ready copy of their article.



Learning Fairness in Multi-Agent Systems

Jiechuan Jiang, Zongqing Lu

Neural Information Processing Systems
